Purpose, Process, Product

This group assignment provides practice in foreign exchange markets as well as R models of those markets. Specifically, we will practice reading in data, exploring time series, estimating autocorrelations and cross-correlations, and investigating volatility clustering in financial time series. We will summarize our experiences in a debrief, paying special attention to the financial economics of exchange rates.

Assignment

This assignment will span Live Sessions 3 and 4 (two weeks). Project 2 is due before Live Session 5. Submit into Coursework > Assignments and Grading > Project 2 > Submission an RMD file with filename lastname-firstname_Project2.Rmd and a knitted PDF or HTML file of the same name.

  1. Use headers (##), R chunks for code, and text to build a report that addresses the two parts of this project.

  2. List in the text the R skills needed to complete this project.

  3. Explain each of the functions (e.g., ggplot()) used to compute and visualize results.

  4. Discuss how well the results begin to answer the business questions posed at the beginning of each part of the project.

Part 1

In this set we will build and explore a data set using filters and ifelse() and diff() statements. We will then answer some questions using plots and a pivot table report. Finally, we will review a function that houses our approach in case we would like to run the same analysis on other data sets.

Problem

Marketing and accounts receivables managers at our company continue to note we have a significant exposure to exchange rates. Our functional currency (what we report in financial statements) is the U.S. dollar (USD).

  • Our customer base is located in the United Kingdom, across the European Union, and in Japan. The exposure hits the gross revenue line of our financials.

  • Cash flow is further affected by the ebb and flow of the accounts receivable components of working capital in producing and selling several products. When exchange rates are volatile, so are earnings and, more importantly, our cash flow.

  • Our company has also missed earnings forecasts for five straight quarters.

To get a handle on exchange rate exposures we download this data set and review some basic aspects of the exchange rates.

# Read in data
library(zoo)
library(xts)
library(ggplot2)
# Read and review a csv file from FRED
exrates <- na.omit(read.csv("data/exrates.csv", header = TRUE))
# Check the data
head(exrates)
##        DATE  USD.EUR  USD.GBP USD.CNY USD.JPY
## 1 1/29/2012 0.763678 0.638932 6.29509 77.1840
## 2  2/5/2012 0.760684 0.633509 6.29429 76.3930
## 3 2/12/2012 0.757491 0.632759 6.29232 77.2049
## 4 2/19/2012 0.760889 0.634166 6.29644 78.7109
## 5 2/26/2012 0.750301 0.632641 6.29710 80.3373
## 6  3/4/2012 0.750474 0.629771 6.29873 81.1607
tail(exrates)
##           DATE  USD.EUR  USD.GBP USD.CNY USD.JPY
## 255 12/11/2016 0.938872 0.791554 6.93141 114.397
## 256 12/18/2016 0.950478 0.796572 6.93042 116.796
## 257 12/25/2016 0.958288 0.810481 6.94908 117.469
## 258   1/1/2017 0.954067 0.813594 6.94929 117.100
## 259   1/8/2017 0.951493 0.812388 6.92820 116.968
## 260  1/15/2017 0.943352 0.820854 6.91781 115.287
str(exrates)
## 'data.frame':    260 obs. of  5 variables:
##  $ DATE   : Factor w/ 260 levels "1/1/2017","1/10/2016",..: 15 103 89 94 100 126 109 114 120 130 ...
##  $ USD.EUR: num  0.764 0.761 0.757 0.761 0.75 ...
##  $ USD.GBP: num  0.639 0.634 0.633 0.634 0.633 ...
##  $ USD.CNY: num  6.3 6.29 6.29 6.3 6.3 ...
##  $ USD.JPY: num  77.2 76.4 77.2 78.7 80.3 ...
# Begin to explore the data
summary(exrates)
##         DATE        USD.EUR          USD.GBP          USD.CNY     
##  1/1/2017 :  1   Min.   :0.7199   Min.   :0.5835   Min.   :6.092  
##  1/10/2016:  1   1st Qu.:0.7544   1st Qu.:0.6224   1st Qu.:6.149  
##  1/11/2015:  1   Median :0.7926   Median :0.6418   Median :6.279  
##  1/12/2014:  1   Mean   :0.8196   Mean   :0.6561   Mean   :6.310  
##  1/13/2013:  1   3rd Qu.:0.8932   3rd Qu.:0.6656   3rd Qu.:6.369  
##  1/15/2017:  1   Max.   :0.9583   Max.   :0.8209   Max.   :6.949  
##  (Other)  :254                                                    
##     USD.JPY      
##  Min.   : 76.39  
##  1st Qu.: 96.90  
##  Median :102.44  
##  Mean   :103.05  
##  3rd Qu.:117.19  
##  Max.   :124.78  
## 
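One thing to check early: str() above shows DATE stored as a Factor, so it must be converted with as.Date() and the matching format string before any time-series work. A minimal base R sanity check on two dates taken from the output above:

```r
# Parse two DATE values shown above; "%m/%d/%Y" matches strings like 1/29/2012
d <- as.Date(c("1/29/2012", "12/11/2016"), format = "%m/%d/%Y")
d
format(d, "%u")   # "7" "7": both observations fall on a Sunday, i.e., weekly data
```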

Questions

  1. What is the nature of exchange rates in general and in particular for this data set? We want to reflect the ups and downs of rate movements, known to managers as currency appreciation and depreciation.
  • We will calculate percentage changes as log returns of currency pairs. Our interest is in the ups and downs. To look at those we use ifelse() statements to define a new column called direction. We will build a data frame to house this initial analysis.

  • Using this data frame, interpret appreciation and depreciation in terms of the impact on the receipt of cash flow from customers’ accounts that are denominated in currencies other than our USD functional currency.
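Before running the chunk below, it may help to confirm on a toy pair of quotes that log differences times 100 approximate ordinary percentage changes for moves of this size (the prices here are made up for illustration):

```r
# Two hypothetical consecutive USD.EUR quotes (illustrative values only)
p_old <- 0.7600
p_new <- 0.7638
pct_simple <- (p_new / p_old - 1) * 100        # ordinary percentage change
pct_log    <- (log(p_new) - log(p_old)) * 100  # log return in percent, as in diff(log(.)) * 100
round(c(simple = pct_simple, log = pct_log), 4)
```

For weekly currency moves of well under a percent, the two measures agree to a few thousandths of a percentage point, which is why log returns are a convenient stand-in for percentage changes.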

# Compute log differences percent using as.matrix to force numeric type
exrates.r <- diff(log(as.matrix(exrates[, -1]))) * 100
head(exrates.r)
##       USD.EUR    USD.GBP     USD.CNY    USD.JPY
## 2 -0.39282058 -0.8523826 -0.01270912 -1.0301113
## 3 -0.42063724 -0.1184583 -0.03130311  1.0571858
## 4  0.44758304  0.2221127  0.06545522  1.9318720
## 5 -1.40130272 -0.2407629  0.01048156  2.0452375
## 6  0.02305476 -0.4546859  0.02588158  1.0197119
## 7  1.19869144  0.7988383  0.22598055  0.6384155
tail(exrates.r)
##        USD.EUR    USD.GBP      USD.CNY    USD.JPY
## 255 -0.1234763 -0.4555311  0.602265010  0.9803397
## 256  1.2285861  0.6319419 -0.014283828  2.0753968
## 257  0.8183343  1.7310378  0.268885929  0.5745646
## 258 -0.4414460  0.3833571  0.003021937 -0.3146198
## 259 -0.2701570 -0.1483412 -0.303945688 -0.1127877
## 260 -0.8592840  1.0367203 -0.150079365 -1.4475722
str(exrates.r)
##  num [1:259, 1:4] -0.3928 -0.4206 0.4476 -1.4013 0.0231 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : chr [1:259] "2" "3" "4" "5" ...
##   ..$ : chr [1:4] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY"
# Create size and direction
size <- na.omit(abs(exrates.r)) # size is an indicator of volatility
head(size)
##      USD.EUR   USD.GBP    USD.CNY   USD.JPY
## 2 0.39282058 0.8523826 0.01270912 1.0301113
## 3 0.42063724 0.1184583 0.03130311 1.0571858
## 4 0.44758304 0.2221127 0.06545522 1.9318720
## 5 1.40130272 0.2407629 0.01048156 2.0452375
## 6 0.02305476 0.4546859 0.02588158 1.0197119
## 7 1.19869144 0.7988383 0.22598055 0.6384155
# colnames(size) <- paste(colnames(size),".size", sep = "") # Teetor
direction <- ifelse(exrates.r > 0, 1, ifelse(exrates.r < 0, -1, 0)) # another indicator of volatility
# colnames(direction) <- paste(colnames(direction),".dir", sep = "")
head(direction)
##   USD.EUR USD.GBP USD.CNY USD.JPY
## 2      -1      -1      -1      -1
## 3      -1      -1      -1       1
## 4       1       1       1       1
## 5      -1      -1       1       1
## 6       1      -1       1       1
## 7       1       1       1       1
# Convert into a time series object: 
# 1. Split into date and rates
dates <- as.Date(exrates$DATE[-1], "%m/%d/%Y")
values <- cbind(exrates.r, size, direction)
# for dplyr pivoting we need a data frame
exrates.df <- data.frame(dates = dates, returns = exrates.r, size = size, direction = direction)
str(exrates.df) # notice the returns.* and direction.* prefixes
## 'data.frame':    259 obs. of  13 variables:
##  $ dates            : Date, format: "2012-02-05" "2012-02-12" ...
##  $ returns.USD.EUR  : num  -0.3928 -0.4206 0.4476 -1.4013 0.0231 ...
##  $ returns.USD.GBP  : num  -0.852 -0.118 0.222 -0.241 -0.455 ...
##  $ returns.USD.CNY  : num  -0.0127 -0.0313 0.0655 0.0105 0.0259 ...
##  $ returns.USD.JPY  : num  -1.03 1.06 1.93 2.05 1.02 ...
##  $ size.USD.EUR     : num  0.3928 0.4206 0.4476 1.4013 0.0231 ...
##  $ size.USD.GBP     : num  0.852 0.118 0.222 0.241 0.455 ...
##  $ size.USD.CNY     : num  0.0127 0.0313 0.0655 0.0105 0.0259 ...
##  $ size.USD.JPY     : num  1.03 1.06 1.93 2.05 1.02 ...
##  $ direction.USD.EUR: num  -1 -1 1 -1 1 1 1 -1 -1 1 ...
##  $ direction.USD.GBP: num  -1 -1 1 -1 -1 1 1 -1 -1 1 ...
##  $ direction.USD.CNY: num  -1 -1 1 1 1 1 1 -1 -1 -1 ...
##  $ direction.USD.JPY: num  -1 1 1 1 1 1 1 -1 -1 -1 ...
# 2. Make an xts object with row names equal to the dates
exrates.xts <- na.omit(as.xts(values, order.by = dates))
str(exrates.xts)
## An 'xts' object on 2012-02-05/2017-01-15 containing:
##   Data: num [1:259, 1:12] -0.3928 -0.4206 0.4476 -1.4013 0.0231 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : NULL
##   ..$ : chr [1:12] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY" ...
##   Indexed by objects of class: [Date] TZ: UTC
##   xts Attributes:  
##  NULL
exrates.zr <- na.omit(as.zooreg(exrates.xts))
str(exrates.zr)
## 'zooreg' series from 2012-02-05 to 2017-01-15
##   Data: num [1:259, 1:12] -0.3928 -0.4206 0.4476 -1.4013 0.0231 ...
##  - attr(*, "dimnames")=List of 2
##   ..$ : NULL
##   ..$ : chr [1:12] "USD.EUR" "USD.GBP" "USD.CNY" "USD.JPY" ...
##   Index:  Date[1:259], format: "2012-02-05" "2012-02-12" "2012-02-19" "2012-02-26" "2012-03-04" ...
##   Frequency: 0.142857142857143
head(exrates.xts)
##                USD.EUR    USD.GBP     USD.CNY    USD.JPY    USD.EUR
## 2012-02-05 -0.39282058 -0.8523826 -0.01270912 -1.0301113 0.39282058
## 2012-02-12 -0.42063724 -0.1184583 -0.03130311  1.0571858 0.42063724
## 2012-02-19  0.44758304  0.2221127  0.06545522  1.9318720 0.44758304
## 2012-02-26 -1.40130272 -0.2407629  0.01048156  2.0452375 1.40130272
## 2012-03-04  0.02305476 -0.4546859  0.02588158  1.0197119 0.02305476
## 2012-03-11  1.19869144  0.7988383  0.22598055  0.6384155 1.19869144
##              USD.GBP    USD.CNY   USD.JPY USD.EUR USD.GBP USD.CNY USD.JPY
## 2012-02-05 0.8523826 0.01270912 1.0301113      -1      -1      -1      -1
## 2012-02-12 0.1184583 0.03130311 1.0571858      -1      -1      -1       1
## 2012-02-19 0.2221127 0.06545522 1.9318720       1       1       1       1
## 2012-02-26 0.2407629 0.01048156 2.0452375      -1      -1       1       1
## 2012-03-04 0.4546859 0.02588158 1.0197119       1      -1       1       1
## 2012-03-11 0.7988383 0.22598055 0.6384155       1       1       1       1

We can plot with the ggplot2 package. In the ggplot statements we use aes, “aesthetics”, to pick the x (horizontal) and y (vertical) axes. Use group = 1 to ensure that all of the data is plotted. The added (+) geom_line is the geometric layer that builds the line plot.

library(ggplot2)
library(plotly)
title.chg <- "Exchange Rate Percent Changes"
p1 <- autoplot.zoo(exrates.xts[,1:4]) + ggtitle(title.chg) + ylim(-5, 5)
p2 <- autoplot.zoo(exrates.xts[,5:8]) + ggtitle(title.chg) + ylim(-5, 5)
ggplotly(p1)
  2. Let’s dig deeper and compute the mean, standard deviation, and other moments. Load the data_moments() function, run it using the exrates size data, and write a knitr::kable() report.
acf(coredata(exrates.xts[ , 1:4])) # returns

acf(coredata(exrates.xts[ , 5:8])) # sizes

pacf(coredata(exrates.xts[ , 1:4])) # returns

pacf(coredata(exrates.xts[ , 5:8])) # sizes

# Load the data_moments() function
## data_moments function
## INPUTS: r vector
## OUTPUTS: list of scalars (mean, sd, median, skewness, kurtosis)
data_moments <- function(data){
  library(moments)
  library(matrixStats)
  mean.r <- colMeans(data)
  median.r <- colMedians(data)
  sd.r <- colSds(data)
  IQR.r <- colIQRs(data)
  skewness.r <- skewness(data)
  kurtosis.r <- kurtosis(data)
  result <- data.frame(mean = mean.r, median = median.r, std_dev = sd.r, IQR = IQR.r, skewness = skewness.r, kurtosis = kurtosis.r)
  return(result)
}
# Run data_moments()
answer <- data_moments(exrates.xts[, 5:8])
# Build pretty table
answer <- round(answer, 4)
knitr::kable(answer)
|        |   mean| median| std_dev|    IQR| skewness| kurtosis|
|:-------|------:|------:|-------:|------:|--------:|--------:|
|USD.EUR | 0.7185| 0.5895|  0.5499| 0.7506|   1.3773|   6.3808|
|USD.GBP | 0.6884| 0.5601|  0.6565| 0.6588|   4.0555|  34.3779|
|USD.CNY | 0.1700| 0.1118|  0.2233| 0.1536|   4.9157|  41.4959|
|USD.JPY | 0.8310| 0.6358|  0.7371| 0.8352|   1.6373|   6.3185|
mean(exrates.xts[,4])
## [1] 0.154916
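If the moments and matrixStats packages are ever unavailable, the skewness and kurtosis columns in the table above can be reproduced with base R. This sketch follows the moments-package convention (population moments; kurtosis is reported raw, so a normal distribution scores 3, not 0):

```r
# Base R population skewness and kurtosis, matching the moments-package convention
skew_kurt <- function(x) {
  m  <- mean(x)
  m2 <- mean((x - m)^2)                  # population variance
  c(skewness = mean((x - m)^3) / m2^1.5,
    kurtosis = mean((x - m)^4) / m2^2)   # raw kurtosis: 3 for a normal
}
skew_kurt(c(1, 2, 3))   # symmetric input: skewness 0, kurtosis 1.5
```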

Part 2

We will use the data from the first part to investigate the interactions among the distributions of exchange rates.

Problem

We want to characterize the distribution of up and down movements visually. We would also like to repeat the analysis periodically for inclusion in management reports.

Questions

  1. How can we show the shape of our exposure to euros, especially given our tolerance for risk? Suppose corporate policy sets tolerance at the 95th percentile. Let’s use the exrates.df data frame with ggplot2 and the cumulative relative frequency function stat_ecdf.
exrates.tol.pct <- 0.95
exrates.tol <- quantile(exrates.df$returns.USD.EUR, exrates.tol.pct)
exrates.tol.label <- paste("Tolerable Rate = ", round(exrates.tol, 2), "%", sep = "")
p <- ggplot(exrates.df, aes(returns.USD.EUR, fill = direction.USD.EUR)) + stat_ecdf(colour = "blue", size = 0.75, geom = "point") + geom_vline(xintercept = exrates.tol, colour = "red", size = 1.5) + annotate("text", x = exrates.tol + 1 , y = 0.75, label = exrates.tol.label, colour = "darkred")
ggplotly(p)
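As a cross-check on the stat_ecdf picture, base R's quantile() and ecdf() should roughly invert one another at the tolerance level. A minimal sketch on simulated returns (random stand-in data, not the exrates series):

```r
set.seed(42)
r    <- rnorm(259)            # stand-in for returns.USD.EUR
tol  <- quantile(r, 0.95)     # the "tolerable rate"
Fhat <- ecdf(r)               # cumulative relative frequency function
Fhat(tol)                     # close to 0.95 by construction
```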
  2. What is the history of correlations in the exchange rate markets? If there is such a “history,” then we have to manage the risk that conducting business in one country will affect business in another, and that bad things will be followed by more bad things more often than good things. We will create a rolling correlation function, corr_rolling, and embed it in the rollapply() function (look this one up!).
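Before looking up rollapply(), the mechanics can be sketched in base R: slide a fixed-width window along two series and recompute cor() inside each window. zoo's rollapply() does the same bookkeeping for whole matrices (toy simulated data here, not the exrates returns):

```r
set.seed(7)
x <- rnorm(120)
y <- 0.5 * x + rnorm(120)     # built to be positively correlated with x
window <- 30
# one correlation per right-aligned window, like rollapply(..., align = "right")
roll_cor <- vapply(window:length(x), function(i) {
  idx <- (i - window + 1):i
  cor(x[idx], y[idx])
}, numeric(1))
length(roll_cor)              # 120 - 30 + 1 = 91 windows
range(roll_cor)               # the estimate wanders around the true correlation
```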
one <- ts(exrates.df$returns.USD.EUR)
two <- ts(exrates.df$returns.USD.GBP)
# or
one <- ts(exrates.zr[,1])
two <- ts(exrates.zr[,2])
ccf(abs(one), abs(two), main = "GBP vs. EUR", lag.max = 20, xlab = "", ylab = "", ci.col = "red")

# build function to repeat these routines
run_ccf <- function(one, two, main = "one vs. two", lag = 20, color = "red"){
  # one and two are equal length series
  # main is title
  # lag is number of lags in cross-correlation
  # color is color of dashed confidence interval bounds
  stopifnot(length(one) == length(two))
  one <- ts(one)
  two <- ts(two)
  main <- main
  lag <- lag
  color <- color
  ccf(one, two, main = main, lag.max = lag, xlab = "", ylab = "", ci.col = color)
  #end run_ccf
}
one <- ts(exrates.df$returns.USD.EUR)
two <- ts(exrates.df$returns.USD.GBP)
# or
one <- exrates.zr[,1]
two <- exrates.zr[,2]
title <- "EUR vs. GBP"
run_ccf(abs(one), abs(two), main = title, lag = 20, color = "red")

# now for volatility (sizes)
one <- ts(abs(exrates.zr[,1]))
two <- ts(abs(exrates.zr[,2]))
title <- "EUR vs. GBP: volatility"
run_ccf(one, two, main = title, lag = 20, color = "red")

# We see some small raw correlations across time with raw returns. More revealing, we see volatility clustering in the correlations when we use return sizes.

One more experiment: rolling correlations and volatilities using these functions:

corr_rolling <- function(x) {   
  dim <- ncol(x)    
  corr_r <- cor(x)[lower.tri(diag(dim), diag = FALSE)]  
  return(corr_r)    
}
vol_rolling <- function(x){
  library(matrixStats)
  vol_r <- colSds(x)
  return(vol_r)
}
ALL.r <- exrates.xts[, 1:4]
window <- 90
corr_r <- rollapply(ALL.r, width = window, corr_rolling, align = "right", by.column = FALSE)
colnames(corr_r) <- c("EUR.GBP", "EUR.CNY", "EUR.JPY", "GBP.CNY", "GBP.JPY", "CNY.JPY")
vol_r <- rollapply(ALL.r, width = window, vol_rolling, align = "right", by.column = FALSE)
colnames(vol_r) <- c("EUR.vol", "GBP.vol", "CNY.vol", "JPY.vol")
year <- format(index(corr_r), "%Y")
r_corr_vol <- merge(ALL.r, corr_r, vol_r, year)
  3. How related are correlations and volatilities? Put another way, do we have to be concerned that inter-market transactions (e.g., customers and vendors transacting in more than one currency) can affect transactions in a single market? Let’s model the exrate data to understand how correlations and volatilities depend upon one another.
library(quantreg)
taus <- seq(.05,.95, .05)   # Roger Koenker UIC Bob Hogg and Allen Craig
fit.rq.CNY.JPY <- rq(log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)    
fit.lm.CNY.JPY <- lm(log(CNY.JPY) ~ log(JPY.vol), data = r_corr_vol)    
# Some test statements  
CNY.JPY.summary <- summary(fit.rq.CNY.JPY, se = "boot")
CNY.JPY.summary
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.05
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -4.21770   0.21233  -19.86393   0.00000
## log(JPY.vol)  -5.54144   1.71291   -3.23511   0.00148
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.1
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.96984   0.12880  -30.82232   0.00000
## log(JPY.vol)  -4.05415   1.25665   -3.22615   0.00152
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.15
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.65544   0.14950  -24.45141   0.00000
## log(JPY.vol)  -5.04776   1.33356   -3.78518   0.00022
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.2
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.41946   0.12234  -27.95059   0.00000
## log(JPY.vol)  -3.90841   1.26108   -3.09926   0.00229
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.25
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.24424   0.14203  -22.84124   0.00000
## log(JPY.vol)  -2.70119   1.30507   -2.06976   0.04009
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.3
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -3.00646   0.13037  -23.06126   0.00000
## log(JPY.vol)  -1.16454   1.13019   -1.03040   0.30439
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.35
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.89360   0.08109  -35.68283   0.00000
## log(JPY.vol)  -0.68706   0.92434   -0.74330   0.45840
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.4
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.85405   0.06685  -42.69551   0.00000
## log(JPY.vol)  -0.52545   1.02010   -0.51510   0.60720
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.45
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.78600   0.08569  -32.51264   0.00000
## log(JPY.vol)  -0.88196   1.22176   -0.72187   0.47143
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.5
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.72762   0.10752  -25.36873   0.00000
## log(JPY.vol)  -0.67233   1.60970   -0.41767   0.67675
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.55
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.53524   0.11579  -21.89604   0.00000
## log(JPY.vol)  -1.64576   1.85245   -0.88842   0.37565
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.6
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.43784   0.12764  -19.09946   0.00000
## log(JPY.vol)  -1.85451   1.88906   -0.98171   0.32774
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.65
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.19872   0.13714  -16.03208   0.00000
## log(JPY.vol)  -2.84742   1.46025   -1.94995   0.05294
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.7
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -2.14073   0.12407  -17.25366   0.00000
## log(JPY.vol)  -2.65416   1.12242   -2.36467   0.01925
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.75
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.97726   0.12393  -15.95450   0.00000
## log(JPY.vol)  -1.89488   0.79811   -2.37421   0.01878
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.8
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.75658   0.14727  -11.92729   0.00000
## log(JPY.vol)  -0.67468   0.95339   -0.70766   0.48019
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.85
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.61049   0.08730  -18.44838   0.00000
## log(JPY.vol)   0.06474   0.52308    0.12377   0.90166
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.9
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.57462   0.03124  -50.39642   0.00000
## log(JPY.vol)   0.06146   0.14635    0.41996   0.67509
## 
## Call: rq(formula = log(CNY.JPY) ~ log(JPY.vol), tau = taus, data = r_corr_vol)
## 
## tau: [1] 0.95
## 
## Coefficients:
##              Value     Std. Error t value   Pr(>|t|) 
## (Intercept)   -1.45763   0.07276  -20.03321   0.00000
## log(JPY.vol)   0.45428   0.50455    0.90037   0.36928
plot(CNY.JPY.summary)


Here is the quantile regression part of the package.

  1. We set taus as the quantiles of interest.
  2. We run the quantile regression using the quantreg package and a call to the rq function.
  3. We can overlay the quantile regression results onto the standard linear model regression.
  4. We can sensitize our analysis with the range of upper and lower bounds on the parameter estimates of the relationship between correlation and volatility.
  5. The log()-log() transformation allows us to interpret the regression coefficients as elasticities, which vary with the quantile. The larger the elasticity, especially if the absolute value is greater than one, the more risk dependence one market has on the other.
  6. The risk relationships can also be viewed year by year. Here we see very different patterns.
  7. \(y = a + bx + e\) is interpreted as systematic movements in \(y = a + bx\), while unsystematic movements are simply \(e\).
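Point 5 can be checked numerically: if y = c * x^b, a regression of log(y) on log(x) recovers b as the (constant) elasticity. A minimal base R sketch with made-up data and a deliberately chosen elasticity of -1.6:

```r
set.seed(1)
x <- exp(rnorm(200))
b <- -1.6                                  # true elasticity, chosen for illustration
y <- 2 * x^b * exp(rnorm(200, sd = 0.05))  # multiplicative noise
fit <- lm(log(y) ~ log(x))
coef(fit)[2]                               # slope estimate, close to -1.6
```

Quantile regression generalizes this: rq() lets the estimated elasticity differ across the taus, which is exactly what the coefficient tables above display.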

Animation

library(quantreg)
library(magick)
img <- image_graph(res = 96)
datalist <- split(r_corr_vol, r_corr_vol$year)
out <- lapply(datalist, function(data){
  p <- ggplot(data, aes(JPY.vol, CNY.JPY)) +
    geom_point() + 
    ggtitle(data$year) + 
    geom_quantile(quantiles = c(0.05, 0.95)) + 
    geom_quantile(quantiles = 0.5, linetype = "longdash") +
    geom_density_2d(colour = "red")  
  print(p)
})
while (!is.null(dev.list()))  dev.off()
#img <- image_background(image_trim(img), 'white')
animation <- image_animate(img, fps = .5)
animation   

Attempt interpretations to help managers understand the way market interactions affect accounts receivable.

Notes on lead and lag

In the ccf() function we get results at both positive and negative lags. A positive lag looks back and a negative lag (a lead) looks forward in the history of a time series. Leading and lagging two different series, then computing the moments and correlations, shows a definite asymmetry.
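The sign convention is easy to verify: build y as a copy of x delayed by three periods; since ccf(x, y) estimates cor(x[t + k], y[t]), the spike then appears at lag -3 (base R only, toy data):

```r
set.seed(3)
x <- rnorm(300)
y <- c(rep(0, 3), head(x, -3))   # y is x delayed by 3 periods
cc <- ccf(x, y, lag.max = 10, plot = FALSE)
cc$lag[which.max(cc$acf)]        # -3: the peak sits on the negative (lead) side
```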

Suppose we lead the USD.EUR return by 5 days and lag the USD.GBP by 5 days. We will compare the correlation in this case with the opposite: lead the USD.GBP return by 5 days and lag the USD.EUR by 5 days. We will use the dplyr package to help us.

library(dplyr)
x <- as.numeric(exrates.df$returns.USD.EUR) # USD.EUR
y <- as.numeric(exrates.df$returns.USD.GBP) # USD.GBP
xy.df <- na.omit(data.frame(date = dates, ahead_x = lead(x, 5), behind_y = lag(y, 5)))
yx.df <- na.omit(data.frame(date = dates, ahead_y = lead(y, 5), behind_x = lag(x, 5)))
answer <- data_moments(na.omit(as.matrix(xy.df[,2:3])))
answer <- round(answer, 4)
knitr::kable(answer)
|         |   mean|  median| std_dev|    IQR| skewness| kurtosis|
|:--------|------:|-------:|-------:|------:|--------:|--------:|
|ahead_x  | 0.0872|  0.1078|  0.9050| 1.1820|   0.1671|   3.6297|
|behind_y | 0.0945| -0.0130|  0.9510| 1.1134|   1.7055|  13.2014|
answer <- data_moments(na.omit(as.matrix(yx.df[,2:3])))
answer <- round(answer, 4)
knitr::kable(answer)
|         |   mean|  median| std_dev|    IQR| skewness| kurtosis|
|:--------|------:|-------:|-------:|------:|--------:|--------:|
|ahead_y  | 0.1072| -0.0047|  0.9593| 1.1139|   1.6409|  12.7057|
|behind_x | 0.0673|  0.1043|  0.8953| 1.1624|   0.1285|   3.5876|
cor(as.numeric(xy.df$ahead_x), as.numeric(xy.df$behind_y))
## [1] 0.0003739413
cor(as.numeric(yx.df$ahead_y), as.numeric(yx.df$behind_x))
## [1] -0.004339494

Leading x and lagging y produces a tiny positive correlation; the opposite produces a small negative correlation. The differences in means and other moments are not huge between the two cases, but combined they produce the correlational differences.